[feature] [sub feature 3] Support qwen Image edit infer with gedit dataset #150

Open
SJTUyh wants to merge 5 commits into AISBench:master from SJTUyh:edit_dev

Conversation

@SJTUyh (Collaborator) commented Feb 13, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry; just make the pull request and seek help from the maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation for this PR and the goal you want to achieve through it.

📝 Modification

Please briefly describe what modifications are made in this PR.

📐 Associated Test Results

Please provide links to the related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools have been used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug has been added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed, and all committers in this PR have signed the CLA.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

Command              Introduction
/gemini review       Performs a code review of the current pull request in its current state by Gemini.
/gemini summary      Provides a summary of the current pull request in its current state by Gemini.
/gemini help         Displays a list of available Gemini commands.
/readthedocs build   Triggers a Read the Docs documentation build for the current pull request in its current state.

SJTUyh changed the title from "[feature] [sub feature 2] Support qwen Image edit infer with gedit dataset" to "[feature] [sub feature 3] Support qwen Image edit infer with gedit dataset" on Feb 13, 2026
@gemini-code-assist commented

Summary of Changes

Hello @SJTUyh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the benchmarking capabilities by introducing full support for the Qwen Image Edit model, particularly for image editing tasks on the gedit dataset. It establishes a robust framework for LLM-based evaluation of model outputs and incorporates sophisticated distributed parallelism mechanisms to optimize performance. Additionally, it refactors output handling to seamlessly accommodate multimodal results, ensuring comprehensive and accurate assessment of image editing models.

Highlights

  • New Feature: Qwen Image Edit Inference: Added comprehensive support for the Qwen Image Edit model, including its integration into the benchmarking framework and specific configurations for the gedit dataset.
  • LLM Judge Integration: Introduced a JudgeInfer worker and new dataset types (BaseJDGDataset, LLMJudgeDataset, Aime2025JDGDataset, GEditJDGDataset) to enable LLM-based evaluation of model predictions.
  • Distributed Parallelism Enhancements: Integrated advanced distributed parallelism utilities, including all_to_all communication, GroupCoordinator for process group management, and ParallelConfig for sequence and classifier-free guidance parallelism, specifically for the Qwen Image Edit model.
  • Multimodal Output Handling: Implemented a new LMMOutput class and LMMGenInferencerOutputHandler to effectively manage and save multimodal outputs (both text and images) generated by Large Multimodal Models.


Changelog
  • ais_bench/benchmark/cli/workers.py
    • Imported os, shutil, PredictionInvalidException, TMAN_CODES, load_jsonl, and dump_jsonl.
    • Added the JudgeInfer class to manage LLM-based inference tasks.
    • Modified the Eval class to incorporate judge_infer_cfg and post-process results.
    • Updated the WORK_FLOW dictionary to include JudgeInfer in the all and eval modes.
  • ais_bench/benchmark/configs/datasets/aime2025/aime2025_gen_0_shot_llmjudge.py
    • Added a new configuration file for the AIME2025 dataset to support LLM judge inference.
  • ais_bench/benchmark/configs/datasets/gedit/gedit_gen.py
    • Added a new configuration file for the GEdit dataset to enable LMM generation inference.
  • ais_bench/benchmark/configs/models/lmm_models/qwen_image_edit.py
    • Added a new model configuration file for the Qwen Image Edit model.
  • ais_bench/benchmark/datasets/aime2025.py
    • Imported LLMJudgeDataset for judge functionality.
    • Added the Aime2025JDGDataset class to support LLM judge functionality for AIME2025.
  • ais_bench/benchmark/datasets/base.py
    • Imported Type for type hinting.
    • Added the BaseJDGDataset class to serve as a base for LLM judge datasets.
  • ais_bench/benchmark/datasets/g_edit.py
    • Added GEditEvaluator for scoring GEdit predictions.
    • Added GEditDataset for loading and processing the GEdit dataset.
    • Added GEditJDGDataset for LLM judge functionality with the GEdit dataset.
  • ais_bench/benchmark/datasets/utils/datasets.py
    • Corrected minor formatting issues in docstrings by removing unnecessary newlines.
  • ais_bench/benchmark/datasets/utils/llm_judge.py
    • Added a new file to define get_a_or_b post-processor, LLMJudgeDataset, and LLMJudgeCorrectEvaluator for LLM-based judging.
  • ais_bench/benchmark/models/__init__.py
    • Imported the newly added QwenImageEditModel.
  • ais_bench/benchmark/models/local_models/base.py
    • Renamed the _generate method to generate in BaseModel.
    • Removed the generate_from_template method.
    • Added BaseLMModel with a generate method that raises AISBenchNotImplementedError.
  • ais_bench/benchmark/models/local_models/qwen_image_edit_mindie_sd.py
    • Added a new file implementing the QwenImageEditModel for Qwen Image Edit inference.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/attn_layer.py
    • Added a new file implementing xFuserLongContextAttention_new4 for attention processing within the Qwen model.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/distributed/all_to_all.py
    • Added a new file implementing all_to_all_4D and SeqAllToAll4D for distributed tensor communication.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/distributed/group_coordinator.py
    • Added a new file implementing GroupCoordinator and SequenceParallelGroupCoordinator for managing distributed process groups.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/distributed/parallel_mgr.py
    • Added a new file implementing ParallelConfig and functions for initializing and managing model parallel groups.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/distributed/utils.py
    • Added a new file with utility functions for generating masked orthogonal rank groups.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/pipeline_qwenimage_edit_plus.py
    • Added a new file implementing the QwenImageEditPlusPipeline for the Qwen Image Edit diffusion pipeline.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/scheduling_flow_match_euler_discrete.py
    • Added a new file implementing FlowMatchEulerDiscreteScheduler for diffusion timestep management.
  • ais_bench/benchmark/models/local_models/qwenimage_edit/transformer_qwenimage.py
    • Added a new file implementing QwenImageTransformer2DModel and related components for the Qwen Image Edit transformer architecture.
  • ais_bench/benchmark/models/output.py
    • Imported os, Image, and Union for multimodal output handling.
    • Introduced the LMMOutput class for handling multimodal outputs, including images and text.
  • ais_bench/benchmark/openicl/icl_inferencer/icl_lmm_gen_inferencer.py
    • Added a new file implementing LMMGenInferencer for multimodal generation inference.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/base_handler.py
    • Added the data_abbr parameter to the get_prediction_result and get_result methods.
    • Adjusted run_cache_consumer to correctly pass the data_abbr parameter.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/bfcl_v3_output_handler.py
    • Added the data_abbr parameter to the get_prediction_result method.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/gen_inferencer_output_handler.py
    • Added the data_abbr parameter to the get_prediction_result method.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/lmm_gen_inferencer_output_handler.py
    • Added a new file implementing LMMGenInferencerOutputHandler for handling LMM generation outputs.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/ppl_inferencer_output_handler.py
    • Added the data_abbr parameter to the get_prediction_result method.
  • ais_bench/benchmark/openicl/icl_prompt_template/icl_prompt_template_mm.py
    • Corrected minor formatting in the check_mm_template and generate_item methods.
  • ais_bench/benchmark/utils/file/file.py
    • Imported mmap and orjson for efficient file operations.
    • Implemented load_jsonl and dump_jsonl functions for reading and writing JSONL files (a hedged sketch follows this changelog).
  • ais_bench/benchmark/utils/image_process.py
    • Added a new file with the pil_to_base64 utility function for image encoding.
  • ais_bench/configs/lmm_exmaple/multi_device_run_qwen_image_edit.py
    • Added a new configuration file for running the Qwen Image Edit model across multiple devices.
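
The load_jsonl / dump_jsonl helpers mentioned above lend themselves to a short illustration. Below is a minimal sketch assuming orjson-backed JSON Lines I/O; the function names match the changelog entry, but the bodies are illustrative only and the mmap-based fast path implied by the imports is omitted:

    import orjson
    from typing import Any, Iterable, List

    def load_jsonl(path: str) -> List[Any]:
        """Read a JSON Lines file into a list of Python objects."""
        records = []
        with open(path, "rb") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines
                    records.append(orjson.loads(line))
        return records

    def dump_jsonl(records: Iterable[Any], path: str) -> None:
        """Write one JSON document per line; orjson.dumps returns bytes."""
        with open(path, "wb") as f:
            for record in records:
                f.write(orjson.dumps(record))
                f.write(b"\n")

The data-first, path-second argument order follows the dump_jsonl call shown later in the review comments.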

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces significant new functionality for image editing with the Qwen model and a judging mechanism for evaluating model outputs. However, a critical insecure deserialization vulnerability using pickle.loads in the distributed coordinator could lead to remote code execution. Additionally, several instances of path traversal were found in the worker post-processing logic, which could allow for arbitrary file deletion or overwrite of .jsonl files. Beyond these security concerns, I've identified areas for improvement related to robustness, code clarity, and best practices, including making task merging more reliable, ensuring file operations are atomic, and improving code consistency by removing mixed-language comments. These issues should be addressed before merging.

I am having trouble creating individual review comments, so my feedback is collected below.

ais_bench/benchmark/models/local_models/qwenimage_edit/distributed/group_coordinator.py (411)

security-critical critical

The recv_object method uses pickle.loads to deserialize data received from other ranks in a distributed environment. pickle is inherently insecure and can be exploited to execute arbitrary code if an attacker can control the data sent over the distributed communication channel. While this occurs in a distributed environment, using pickle for network communication is a significant security risk, especially if any node in the cluster is compromised, as it allows for lateral movement and remote code execution (RCE) across the cluster.

Recommendation: Replace pickle with a secure serialization format such as json or orjson. If complex Python objects must be transferred, consider using a safer alternative or implementing strict validation of the deserialized data.
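
As a hedged sketch of that recommendation (not the PR's actual code), the object send/recv path could serialize JSON-compatible metadata with orjson and ship the bytes as a uint8 tensor; the helper names below are hypothetical:

    import orjson
    import torch

    def encode_object(obj) -> torch.Tensor:
        """Serialize a JSON-compatible object into a uint8 tensor for dist.send()."""
        payload = orjson.dumps(obj)  # raises TypeError on non-JSON types, which fails fast
        return torch.frombuffer(bytearray(payload), dtype=torch.uint8)

    def decode_object(buf: torch.Tensor):
        """Inverse of encode_object; unlike pickle.loads, this cannot execute code."""
        return orjson.loads(buf.cpu().numpy().tobytes())

This only covers plain dicts, lists, strings, and numbers; richer objects should be validated explicitly rather than deserialized blindly.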

ais_bench/benchmark/models/local_models/base.py (60)

high

The method signature for generate has been changed from _generate(self, input, ...) to generate(self, inputs, ...). However, the base class BaseModel still has an abstract method _generate. This should be changed to generate to match the new signature in BaseLMModel and avoid potential NotImplementedError issues in subclasses.
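
A minimal sketch of the suggested alignment (class and method names follow the review comment; the parameter list is an assumption):

    from abc import ABC, abstractmethod
    from typing import List

    class BaseModel(ABC):
        @abstractmethod
        def generate(self, inputs: List[str], max_out_len: int) -> List[str]:
            """Subclasses override generate(); keeping an abstract _generate here
            while subclasses only implement generate() would leave the old hook
            permanently unimplemented."""
            ...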

ais_bench/benchmark/models/local_models/qwen_image_edit_mindie_sd.py (77)

high

The DEFAULT_NUM_INFERENCE_STEPS is set to 1, with the original value of 40 commented out. A single inference step is unusually low for a diffusion model and will likely produce very low-quality images. This might be for quick testing, but it's a risky default. If this is for debugging, consider making it more explicit or using a higher default value.
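
One hedged way to keep a production-quality default while still allowing a fast smoke-test path (the constructor parameter and default value below are illustrative, not the PR's actual configuration):

    from typing import Optional

    DEFAULT_NUM_INFERENCE_STEPS = 40  # typical quality default for a diffusion pipeline

    class QwenImageEditModel:
        def __init__(self, num_inference_steps: Optional[int] = None, **kwargs):
            # A debug value such as 1 must be requested explicitly by the caller
            # rather than being silently baked in as the default.
            self.num_inference_steps = num_inference_steps or DEFAULT_NUM_INFERENCE_STEPS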

ais_bench/benchmark/cli/workers.py (218-223)

security-medium medium

This section is vulnerable to path traversal, as file paths are constructed using unsanitized model and dataset abbreviations (abbr). An attacker could use path traversal sequences (e.g., ../) to delete arbitrary .jsonl files outside the intended directory. Additionally, removing the original prediction file before writing the new one is risky; if the dump_jsonl operation fails, the original data will be lost. Consider sanitizing abbr values and using atomic file operations (write to a temporary file then move) to prevent both path traversal and data loss.

            tmp_judge_org_prediction_path = judge_org_prediction_path + '.tmp'
            dump_jsonl(judge_preds, tmp_judge_org_prediction_path)
            shutil.move(tmp_judge_org_prediction_path, judge_org_prediction_path)
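
For the sanitization half of the recommendation, a small confinement helper along these lines would reject traversal attempts in abbr-derived paths (the helper name and regex are illustrative):

    import os
    import re

    def safe_join(base_dir: str, *parts: str) -> str:
        """Join abbr-derived components under base_dir, rejecting path traversal."""
        # Replace path separators and other unexpected characters with '_'.
        cleaned = [re.sub(r"[^\w.\-]", "_", part) for part in parts]
        path = os.path.abspath(os.path.join(base_dir, *cleaned))
        if not path.startswith(os.path.abspath(base_dir) + os.sep):
            raise ValueError(f"path escapes the output directory: {path}")
        return path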

ais_bench/benchmark/cli/workers.py (286-293)

security-medium medium

The Eval worker constructs file paths using unsanitized model and dataset abbreviations, leading to a path traversal vulnerability. This could allow overwriting arbitrary .jsonl files via shutil.copy. It's crucial to sanitize all abbr values and restrict paths to the intended output directory. Additionally, please translate the Chinese comment on line 292 to English for consistency and readability.

                    # Copy the file from cur_results_path to final_org_results_path

ais_bench/benchmark/cli/workers.py (176-182)

medium

Using str() on a dictionary to generate a key for grouping is not robust. The string representation of a dictionary is not guaranteed to be consistent if the key order changes. A more reliable method is to use json.dumps with sort_keys=True to create a canonical string representation of the dictionary.

            key = (
                task["models"][0]["abbr"] # same model
                + "_"
                + str(task['datasets'][0][0]['type']) # same dataset type
                + "_"
                + json.dumps(task["datasets"][0][0]["infer_cfg"]["inferencer"], sort_keys=True) # same inferencer with the same args
            )
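
A quick illustration of why this matters (the dictionary values are made up):

    import json

    a = {"type": "GenInferencer", "max_out_len": 512}
    b = {"max_out_len": 512, "type": "GenInferencer"}

    assert a == b                        # equal dicts ...
    assert str(a) != str(b)              # ... but insertion order leaks into str()
    assert json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True)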

ais_bench/benchmark/models/local_models/base.py (136-146)

medium

The generate_from_template method has been removed. While it might not be used currently, this could be a breaking change for other parts of the codebase that might rely on it. Please ensure this removal is intentional and all call sites have been updated. If it's no longer needed, this is fine, but it's a significant removal worth double-checking.

ais_bench/benchmark/models/local_models/qwen_image_edit_mindie_sd.py (223-224)

medium

There are print statements here and in other places in this file (e.g., line 250). These should be replaced with proper logging using self.logger for better log management, especially to control log levels and output destinations in production environments. Print statements can clutter the output and are hard to disable.

        self.logger.debug(f"in _generate")
        self.logger.debug(f"输入: {input}")
